Adversarial training method with adaptive attack strength
Tong CHEN, Jiwei WEI, Shiyuan HE, Jingkuan SONG, Yang YANG
Journal of Computer Applications    2024, 44 (1): 94-100.   DOI: 10.11772/j.issn.1001-9081.2023060854

The vulnerability of deep neural networks to adversarial attacks has raised significant concerns about the security and reliability of artificial intelligence systems. Adversarial training is an effective approach to enhance adversarial robustness. To address the issue that existing methods adopt fixed adversarial sample generation strategies and thus neglect the importance of the generation phase for adversarial training, an adversarial training method based on adaptive attack strength was proposed. Firstly, the clean sample and the adversarial sample were input into the model to obtain their outputs. Then, the difference between the model outputs for the clean sample and the adversarial sample was calculated. Finally, the change of this difference relative to the previous moment was measured to automatically adjust the strength of the adversarial sample. Comprehensive experimental results on three benchmark datasets demonstrate that, compared with the baseline method Adversarial Training with Projected Gradient Descent (PGD-AT), the proposed method improves the robust accuracy under AA (AutoAttack) by 1.92, 1.50 and 3.35 percentage points respectively, and it outperforms the state-of-the-art defense method Adversarial Training with Learnable Attack Strategy (LAS-AT) in terms of both robustness and natural accuracy. Furthermore, from the perspective of data augmentation, the proposed method effectively addresses the problem of the diminishing augmentation effect during adversarial training.
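The adaptive adjustment step can be sketched as a simple feedback rule. The update factor, the bounds, and the direction of adjustment below are illustrative assumptions, not the paper's exact schedule:

```python
def adjust_strength(eps, gap, prev_gap, eps_min=1/255, eps_max=16/255, rate=1.25):
    # If the clean/adversarial output gap shrank since the previous epoch,
    # the model has adapted to the current strength, so attack harder;
    # otherwise back off. Clamp eps to a sensible range.
    eps = eps * rate if gap < prev_gap else eps / rate
    return max(eps_min, min(eps, eps_max))

# Toy usage: the output gap dropped from 0.50 to 0.20, so the strength grows.
new_eps = adjust_strength(8/255, gap=0.20, prev_gap=0.50)
```

The clamp keeps the perturbation budget within the range commonly used for image models.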

Medical image privacy protection based on thumbnail encryption and distributed storage
Na ZHOU, Ming CHENG, Menglin JIA, Yang YANG
Journal of Computer Applications    2023, 43 (10): 3149-3155.   DOI: 10.11772/j.issn.1001-9081.2022111646

With the popularity of cloud storage services and telemedicine platforms, more and more medical images are uploaded to the cloud. Once uploaded, these images may be leaked to unauthorized third parties, disclosing users' personal privacy. Moreover, if medical images are stored on a single server only, they are vulnerable to attacks that may cause the loss of all data. To solve these problems, a medical image privacy protection algorithm based on thumbnail encryption and distributed storage was proposed. Firstly, by encrypting the thumbnail of the original medical image, the relevance of medical images was preserved while achieving the encryption effect. Secondly, a double embedding method was adopted when hiding secret information, with data extraction and image recovery performed separately, to achieve Reversible Data Hiding (RDH) in the encrypted image. Finally, a distributed storage method based on a polynomial shared matrix was used to generate n shares of the image and distribute them to n servers. Experimental results show that, by using the encrypted thumbnail as carrier, the proposed algorithm exceeds traditional encryption methods in embedding rate. Even if some servers are attacked, the receiver can recover the original image and the private information as long as no fewer than k shares are received. In experiments on the privacy protection of medical images, both attack resistance and image recovery were evaluated, and the analysis results show that the proposed encryption algorithm has good performance and high security.
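The k-of-n distributed storage step relies on polynomial secret sharing. Below is a minimal Shamir-style sketch over a prime field; the field size and interface are illustrative assumptions, and the paper's polynomial shared matrix construction applies the same idea to image data:

```python
import random

P = 2**61 - 1  # a Mersenne prime; all share arithmetic is done mod P

def make_shares(secret, k, n, seed=0):
    # Random degree-(k-1) polynomial with the secret as constant term.
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 over the prime field.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any k of the n shares reconstruct the secret; fewer than k reveal nothing about it.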

Reversible data hiding in encrypted image based on multi-objective optimization
Xiangyu ZHANG, Yang YANG, Guohui FENG, Chuan QIN
Journal of Computer Applications    2022, 42 (6): 1716-1723.   DOI: 10.11772/j.issn.1001-9081.2021061495

Focusing on the issues that Reserving Room Before Encryption (RRBE) embedding algorithms require a series of pre-processing steps and Vacating Room After Encryption (VRAE) embedding algorithms offer less embedding space, an algorithm of reversible data hiding in encrypted images based on multi-objective optimization was proposed to improve the embedding rate while simplifying the algorithm pipeline and reducing the workload. In this algorithm, two representative algorithms from RRBE and VRAE were combined on the same carrier, and performance indicators such as the amount of embedded information, the distortion of the directly decrypted image, the extraction error rate, and the computational complexity were formulated as optimization sub-objectives. Then, the efficiency coefficient method was used to establish a model that solves for the relative optimal ratio in which the two algorithms are applied. Experimental results show that the proposed algorithm reduces the computational complexity of using the RRBE algorithm alone, enables users to flexibly allocate optimization objectives according to different needs in actual application scenarios, and at the same time obtains better image quality and a satisfactory amount of embedded information.
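The efficiency coefficient method can be sketched as follows. The linear/quadratic trade-off model and the equal weights below are toy assumptions standing in for the paper's measured indicators:

```python
def efficiency_score(values, best, worst, weights):
    # Map each indicator onto [0, 1] relative to its best/worst admissible
    # value (the efficiency coefficient), then combine by weighted sum.
    coeffs = [(v - w) / (b - w) for v, b, w in zip(values, best, worst)]
    return sum(wt * c for wt, c in zip(weights, coeffs))

# Toy trade-off: embedding capacity rises linearly with the RRBE ratio alpha,
# while complexity grows quadratically; pick the ratio with the best score.
def score(alpha):
    capacity, complexity = alpha, alpha ** 2
    return efficiency_score([capacity, complexity],
                            best=[1.0, 0.0], worst=[0.0, 1.0],
                            weights=[0.5, 0.5])

best_ratio = max([i / 4 for i in range(5)], key=score)
```

With this toy model the score peaks at an intermediate mixing ratio, which is exactly the kind of compromise the multi-objective formulation is after.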

Degree centrality based method for cognitive feature selection
ZHANG Xiaofei, YANG Yang, HUANG Jiajin, ZHONG Ning
Journal of Computer Applications    2021, 41 (9): 2767-2772.   DOI: 10.11772/j.issn.1001-9081.2020111794
To address the uncertainty of cognitive feature selection in brain atlases, a Degree Centrality based Cognitive Feature Selection Method (DC-CFSM) was proposed. First, the Functional Brain Network (FBN) of the subjects performing the cognitive experiment tasks was constructed based on the brain atlas, and the Degree Centrality (DC) of each Region Of Interest (ROI) in the FBN was calculated. Next, the significance of the differences of the same cortical ROI between different cognitive states during task execution was statistically tested and ranked. Finally, the Human Brain Cognitive Architecture-Area Under Curve (HBCA-AUC) values were calculated for the ranked regions of interest, and the performance of several cognitive feature selection methods was evaluated. In experiments on functional Magnetic Resonance Imaging (fMRI) data of mental arithmetic cognitive tasks, the HBCA-AUC values obtained by DC-CFSM on the Task Positive System (TPS), Task Negative System (TNS), and Task Support System (TSS) of the human brain cognitive architecture were 0.669 2, 0.304 0 and 0.468 5 respectively. Compared with Extremely randomized Trees (Extra Trees), Adaptive Boosting (AdaBoost), random forest, and eXtreme Gradient Boosting (XGB), the recognition rate for TPS of DC-CFSM was increased by 22.17%, 13.90%, 24.32% and 37.19% respectively, while its misrecognition rate for TNS was reduced by 20.46%, 29.70%, 44.96% and 33.39% respectively. DC-CFSM can better reflect the categories and functions of the human brain cognitive system when selecting cognitive features from brain atlases.
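Degree centrality of an ROI is simply the fraction of other ROIs it connects to. A minimal sketch on a binary adjacency matrix (the paper computes this on FBNs built from fMRI data):

```python
def degree_centrality(adj):
    # Normalized degree: connections of each node divided by (n - 1).
    n = len(adj)
    return [sum(row) / (n - 1) for row in adj]

# Star-shaped toy network: node 0 is linked to every other node.
star = [[0, 1, 1, 1],
        [1, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 0, 0, 0]]
dc = degree_centrality(star)
```

The hub gets centrality 1.0, the leaves 1/3, which is the ranking signal DC-CFSM feeds into its statistical comparison.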
Loop-level speculative parallelism analysis of kernel program in TACLeBench
MENG Huiling, WANG Yaobin, LI Ling, YANG Yang, WANG Xinyi, LIU Zhiqin
Journal of Computer Applications    2021, 41 (9): 2652-2657.   DOI: 10.11772/j.issn.1001-9081.2020111792
Thread-Level Speculation (TLS) technology can tap the parallel execution potential of programs and improve the utilization of multi-core resources. However, the current TACLeBench kernel benchmarks have not been effectively analyzed for TLS parallelization. In response to this problem, a loop-level speculative execution analysis scheme and an analysis tool were designed. With 7 representative TACLeBench kernel benchmarks selected, firstly, an initialization analysis was performed on the programs, and the hot program fragments were selected for inserting loop identifiers. Then, these fragments were cross-compiled, the speculative threads and the memory-address-related data were recorded, and the maximum potential of loop-level parallelism was analyzed. Finally, the runtime characteristics of the programs (thread granularity, parallelizable coverage, dependency characteristics) and the impact of the source code on the speedup ratio were comprehensively discussed. Experimental results show that: 1) these programs are suitable for TLS acceleration; compared with serial execution, under loop-structure speculative execution the speedup ratios of most programs are above 2, with the highest reaching 20.79; 2) by using TLS to accelerate the TACLeBench kernel programs, most applications can effectively exploit 4-core to 16-core computing resources.
Auditable signature scheme for blockchain based on secure multi-party
WANG Yunye, CHENG Yage, JIA Zhijuan, FU Junjun, YANG Yanyan, HE Yuchu, MA Wei
Journal of Computer Applications    2020, 40 (9): 2639-2645.   DOI: 10.11772/j.issn.1001-9081.2020010096
Aiming at the credibility problem, a secure multi-party auditable signature scheme for blockchain was proposed. In the proposed scheme, a trust vector with timestamps was introduced, and a trust matrix composed of multi-dimensional vector groups was constructed to regularly record the trustworthy behavior of participants, thereby establishing a credible evaluation mechanism for the participants. The evaluation results were stored in the blockchain as a basis for verification. On the premise that the participants are trusted, a secure and trusted signature scheme was constructed through secret sharing technology. Security analysis shows that the proposed scheme can effectively reduce the damage caused by malicious participants, detect the credibility of participants, and resist mobile attacks. Performance analysis shows that the proposed scheme has lower computational complexity and higher execution efficiency.
Location based service location privacy protection method based on location security in augmented reality
YANG Yang, WANG Ruchuan
Journal of Computer Applications    2020, 40 (5): 1364-1368.   DOI: 10.11772/j.issn.1001-9081.2019111982

The rapid development of Location Based Service (LBS) and Augmented Reality (AR) technology leads to hidden dangers of user location privacy leakage. After analyzing the advantages and disadvantages of existing location privacy protection methods, a location privacy protection method based on location security was proposed. The zone security degree and the camouflage region were introduced into the method, with zone security defined as a metric indicating whether a zone needs protection. The zone security degree of insecure zones (zones that need protection) was set to 1, while that of secure zones (zones that do not need protection) was set to 0, and the location security degree was calculated from the zone security degrees and recognition levels. Experimental results show that, compared with a method without location security, this method can reduce the average location error and enhance the average security, thereby effectively protecting user location privacy and improving the service quality of LBS.

Greedy binary lion swarm optimization algorithm for solving multidimensional knapsack problem
YANG Yan, LIU Shengjian, ZHOU Yongquan
Journal of Computer Applications    2020, 40 (5): 1291-1294.   DOI: 10.11772/j.issn.1001-9081.2019091638

The Multidimensional Knapsack Problem (MKP) is a typical multi-constraint combinatorial optimization problem. In order to solve it, a Greedy Binary Lion Swarm Optimization (GBLSO) algorithm was proposed. Firstly, with the help of a binary code transform formula, the positions of lion individuals were discretized to obtain a binary lion swarm algorithm. Secondly, an inverse moving operator was introduced to update the position of the lion king, and the positions of the lionesses and lion cubs were redefined. Thirdly, the greedy algorithm was fully utilized to make solutions feasible, so as to enhance the local search ability and speed up convergence. Finally, simulations on 10 typical MKP instances were carried out to compare the GBLSO algorithm with the Discrete binary Particle Swarm Optimization (DPSO) algorithm and the Binary Bat Algorithm (BBA). The experimental results show that GBLSO is an effective new method for solving MKP, with good convergence efficiency, high optimization accuracy and good robustness.
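The greedy feasibility-repair step typically drops low-value items until every constraint holds, then re-adds whatever still fits. A sketch using the common profit/weight pseudo-utility ordering (the exact repair operator in the paper may differ):

```python
def is_feasible(x, weights, caps):
    # One weight row and one capacity per knapsack dimension.
    return all(sum(w[i] for i, xi in enumerate(x) if xi) <= c
               for w, c in zip(weights, caps))

def greedy_repair(x, profits, weights, caps):
    x = list(x)
    # Pseudo-utility: profit over capacity-normalized weight across dimensions.
    density = [profits[i] / sum(w[i] / c for w, c in zip(weights, caps))
               for i in range(len(profits))]
    # Drop phase: remove lowest-density packed items until feasible.
    for i in sorted(range(len(x)), key=lambda i: density[i]):
        if is_feasible(x, weights, caps):
            break
        if x[i]:
            x[i] = 0
    # Add phase: pack highest-density items that still fit.
    for i in sorted(range(len(x)), key=lambda i: -density[i]):
        if not x[i]:
            x[i] = 1
            if not is_feasible(x, weights, caps):
                x[i] = 0
    return x
```

Applied to an infeasible binary lion position, this returns a feasible solution without discarding more value than necessary.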

Early diagnosis and prediction of Parkinson's disease based on clustering medical text data
ZHANG Xiaobo, YANG Yan, LI Tianrui, LU Fan, PENG Lilan
Journal of Computer Applications    2020, 40 (10): 3088-3094.   DOI: 10.11772/j.issn.1001-9081.2020030359
In view of the problem of early intelligent diagnosis of Parkinson's Disease (PD), which occurs more commonly in the elderly, clustering technologies based on medical detection text data were proposed for the analysis and prediction of PD. Firstly, the original dataset was pre-processed to obtain effective feature information, and these features were reduced by the Principal Component Analysis (PCA) method to eight spaces of different dimensions. Then, five classical clustering models and three different clustering ensemble methods were respectively used to cluster the data of the eight dimensional spaces. Finally, four clustering performance indexes were selected to predict PD subjects with dopamine deficiency, healthy controls, and Scans Without Evidence of Dopamine Deficiency (SWEDD) PD subjects. The simulation results show that the clustering accuracy of the Gaussian Mixture Model (GMM) reaches 89.12% when the PCA feature dimension is 30, that of Spectral Clustering (SC) is 61.41% when the PCA feature dimension is 70, and that of the Meta-CLustering Algorithm (MCLA) reaches 59.62% when the PCA feature dimension is 80. The comparative experimental results show that GMM has the best clustering effect among the five classical clustering methods when the PCA feature dimension is less than 40, and MCLA shows excellent clustering performance among the three clustering ensemble methods across the different feature dimensions, thereby providing technical and theoretical support for the early intelligent auxiliary diagnosis of PD.
Low complexity narrowband physical downlink control channel blind detection algorithm based on correlation detection
WANG Dan, LI Anyi, YANG Yanjuan
Journal of Computer Applications    2019, 39 (9): 2652-2657.   DOI: 10.11772/j.issn.1001-9081.2019020262

In NarrowBand Internet of Things (NB-IoT) systems, Internet of Things (IoT) terminals should decode Downlink Control Information (DCI) quickly in order to correctly receive the resource allocation and scheduling information of the data channel. Therefore, a low complexity Narrowband Physical Downlink Control Channel (NPDCCH) blind detection algorithm using correlation detection was proposed for NPDCCH with a search space size greater than or equal to 32. By applying two correlation judgments to the data in a possible minimum repetition transmission unit of NPDCCH, the invalid data in the search space were removed to reduce the computational complexity; the repetition periods containing valid data were then combined and decoded to improve the blind detection performance. Finally, theoretical and simulation analyses of the two correlation thresholds used in the correlation detection were carried out. Results show that, compared with the conventional exhaustive blind detection algorithm, the decoding complexity of the proposed algorithm is reduced by at least 75% and the detection performance gain is increased by 2.5 dB to 3.5 dB. The proposed algorithm is thus more suitable for engineering practice.
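The correlation judgment can be sketched as a normalized cross-correlation between two received copies of the minimum repetition unit; valid DCI repeats coherently, noise does not. The threshold value here is illustrative (the paper derives its two thresholds analytically):

```python
def norm_corr(a, b):
    # Normalized cross-correlation of two equal-length sample blocks.
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

def is_candidate(block1, block2, threshold=0.5):
    # First correlation judgment (sketch): two copies of the same minimum
    # repetition unit should correlate strongly if valid DCI is present.
    return norm_corr(block1, block2) >= threshold
```

Blocks failing the test are skipped, so the expensive combining and decoding runs only on plausible candidates.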

Link prediction algorithm based on high-order proximity approximation
YANG Yanlin, YE Zhonglin, ZHAO Haixing, MENG Lei
Journal of Computer Applications    2019, 39 (8): 2366-2373.   DOI: 10.11772/j.issn.1001-9081.2019010213
Most existing link prediction algorithms only study the first-order similarity between nodes and their neighbors, without considering the high-order similarity between nodes and the neighbors of their neighbors. In order to solve this problem, a Link Prediction algorithm based on High-Order Proximity Approximation (LP-HOPA) was proposed. Firstly, the normalized adjacency matrix and the similarity matrix of a network were computed. Secondly, the similarity matrix was decomposed by matrix factorization to obtain the representation vectors of the network nodes and their contexts. Thirdly, the original similarity matrix was optimized to higher orders by the Network Embedding Update (NEU) algorithm from high-order network representation learning, with the higher-order similarity matrix representation calculated using the normalized adjacency matrix. Finally, extensive experiments were carried out on four real datasets. Experimental results show that, compared with the original link prediction algorithms, the accuracy of most link prediction algorithms optimized by LP-HOPA is improved by 4% to 50%. In addition, LP-HOPA can transform a link prediction algorithm based on the local structure information of a low-order network into one based on high-order node features, which confirms the validity and feasibility of link prediction based on high-order proximity approximation to a certain extent.
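The NEU-style high-order optimization can be sketched as one matrix update S' = S + λ1·Â·S + λ2·Â·(Â·S), which implicitly injects second- and third-order proximities; λ1 = 0.5 and λ2 = 0.25 follow common practice and are assumptions here:

```python
def matmul(A, B):
    # Plain dense matrix product on lists of lists.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def madd(A, B, s=1.0):
    # Elementwise A + s * B.
    return [[a + s * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def neu_update(S, A_hat, lam1=0.5, lam2=0.25):
    # One NEU step: S' = S + lam1 * A_hat @ S + lam2 * A_hat @ (A_hat @ S).
    AS = matmul(A_hat, S)
    AAS = matmul(A_hat, AS)
    return madd(madd(S, AS, lam1), AAS, lam2)
```

With the identity as normalized adjacency the update just rescales S, which makes the mixing coefficients easy to check.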
Greedy core acceleration dynamic programming algorithm for solving discounted {0-1} knapsack problem
SHI Wenxu, YANG Yang, BAO Shengli
Journal of Computer Applications    2019, 39 (7): 1912-1917.   DOI: 10.11772/j.issn.1001-9081.2018112393

As existing dynamic programming algorithms cannot quickly solve the Discounted {0-1} Knapsack Problem (D{0-1}KP), a Greedy Core Acceleration Dynamic Programming (GCADP) algorithm was proposed based on the idea of dynamic programming combined with the New Greedy Repair Optimization Algorithm (NGROA) and the core algorithm, accelerating the solution by reducing the problem scale. Firstly, an incomplete item was obtained based on the greedy solution of the problem given by NGROA. Then, the radius and range of the fuzzy core interval were found by calculation. Finally, the Basic Dynamic Programming (BDP) algorithm was used to solve for the items in the fuzzy core interval and the items in the same item set. The experimental results show that the GCADP algorithm is suitable for solving D{0-1}KP, and its average solution speed is improved by 76.24% and 75.07% compared with that of the BDP algorithm and FirEGA (First Elitist reservation strategy Genetic Algorithm) respectively.
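The BDP step over the core items can be sketched as a group-wise knapsack DP: each discounted group offers mutually exclusive choices of item A, item B, or both at a discounted weight. The `(wa, pa, wb, pb, wab, pab)` group encoding is an illustrative simplification:

```python
def d01kp_dp(groups, capacity):
    # dp[c] = best profit achievable with capacity c; each group contributes
    # at most one of: nothing, item A, item B, or both at the discount.
    dp = [0] * (capacity + 1)
    for (wa, pa, wb, pb, wab, pab) in groups:
        new = dp[:]  # "nothing" choice carries dp over unchanged
        for c in range(capacity + 1):
            for w, p in ((wa, pa), (wb, pb), (wab, pab)):
                if w <= c:
                    new[c] = max(new[c], dp[c - w] + p)
        dp = new
    return dp[capacity]
```

GCADP runs this recurrence only on the fuzzy core interval instead of the whole item list, which is where the speedup comes from.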

Robust multi-manifold discriminant local graph embedding based on maximum margin criterion
YANG Yang, WANG Zhengqun, XU Chunlin, YAN Chen, JU Ling
Journal of Computer Applications    2019, 39 (5): 1453-1458.   DOI: 10.11772/j.issn.1001-9081.2018102113
Most existing multi-manifold face recognition algorithms directly process the original data containing noise, yet noisy data often have a negative impact on recognition accuracy. In order to solve this problem, a Robust Multi-Manifold Discriminant Local Graph Embedding algorithm based on the Maximum Margin Criterion (RMMDLGE/MMC) was proposed. Firstly, a denoising projection was introduced to iteratively reduce the noise in the original data and extract purer data. Secondly, the data images were divided into blocks and a multi-manifold model was established. Thirdly, combined with the idea of the maximum margin criterion, an optimal projection matrix was sought that maximizes the sample distances across different manifolds while minimizing the sample distances within the same manifold. Finally, the distance from the test sample manifold to the training sample manifold was calculated for classification and identification. Experimental results show that, compared with the well-performing Multi-Manifold Local Graph Embedding algorithm based on the Maximum Margin Criterion (MLGE/MMC), the classification recognition rate of the proposed algorithm is improved by 1.04, 1.28 and 2.13 percentage points on the noisy ORL, Yale and FERET databases respectively, and the classification effect is obviously improved.
Ultra-wideband channel environment classification algorithm based on CNN
YANG Yanan, XIA Bin, ZHAO Lei, YUAN Wenhao
Journal of Computer Applications    2019, 39 (5): 1421-1424.   DOI: 10.11772/j.issn.1001-9081.2018071516
To solve the problem that Non Line Of Sight (NLOS) state identification requires classification of known channel types, a channel environment classification algorithm based on Convolutional Neural Network (CNN) was proposed. Firstly, an Ultra-WideBand (UWB) channel was sampled, and a sample set was constructed. Then, a CNN was trained by the sample set to extract features of different channel scenes. Finally, the classification of UWB channel environment was realized. The experimental results show that the overall accuracy of the model using the proposed algorithm is about 93.40% and the algorithm can effectively realize the classification of channel environments.
New simplified model of discounted {0-1} knapsack problem and solution by genetic algorithm
YANG Yang, PAN Dazhi, LIU Yi, TAN Dailun
Journal of Computer Applications    2019, 39 (3): 656-662.   DOI: 10.11772/j.issn.1001-9081.2018071580
The current Discounted {0-1} Knapsack Problem (D{0-1}KP) model treats the discounted relationship as a new individual, so a repair method must be adopted during solving to repair the individual coding, which limits the model to few solution methods. In order to solve this problem, by changing the binary code expression in the model, an expression method that keeps the discounted relationship out of the individual code was proposed. Firstly, the discounted relationship was established if and only if each involved individual encoding value was one (that is, their product was one). According to this setting, a Simplified Discounted {0-1} Knapsack Problem (SD{0-1}KP) model was established. Then, an improved genetic algorithm, FG (First Genetic algorithm), was proposed for the SD{0-1}KP model based on the Elitist reservation strategy Genetic Algorithm (EGA) and the GREedy strategy (GRE). Finally, combined with the penalty function method, a high precision penalty function method, SG (Second Genetic algorithm), was proposed for SD{0-1}KP. The results show that the SD{0-1}KP model can fully cover the problem domain of D{0-1}KP. Compared with FirEGA (First Elitist reservation strategy Genetic Algorithm), the two proposed algorithms have obvious advantages in solving speed, and the SG algorithm introduces the penalty function method for the first time, which enriches the solution methods for this problem.
Multi-attribute decision making method based on Pythagorean fuzzy Frank operator
PENG Dinghong, YANG Yang
Journal of Computer Applications    2019, 39 (2): 316-322.   DOI: 10.11772/j.issn.1001-9081.2018061195
To solve multi-attribute decision making problems in a Pythagorean fuzzy environment, a multi-attribute decision making method based on the Pythagorean fuzzy Frank operator was proposed. Firstly, the Pythagorean fuzzy number and the Frank operator were combined to obtain operation rules based on the Frank operator. Then, Pythagorean fuzzy Frank operators were proposed, including the Pythagorean fuzzy Frank weighted average operator and the Pythagorean fuzzy Frank weighted geometric operator, and the properties of these operators were discussed. Finally, a multi-attribute decision making method based on the Pythagorean fuzzy Frank operator was proposed and applied to an example of green supplier selection. The example analysis shows that the proposed method can solve actual multi-attribute decision making problems and can further be applied to areas such as risk management and artificial intelligence.
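The operation rules rest on the Frank t-norm and its dual t-conorm. A sketch of the underlying pair (the paper applies these to the squared membership degrees of Pythagorean fuzzy numbers; lambda = 2 is an arbitrary illustrative parameter):

```python
import math

def frank_tnorm(a, b, lam=2.0):
    # Frank t-norm: T(a, b) = log_lam(1 + (lam^a - 1)(lam^b - 1)/(lam - 1)),
    # defined for lam > 0, lam != 1; tends to the product t-norm as lam -> 1.
    return math.log(1 + (lam**a - 1) * (lam**b - 1) / (lam - 1), lam)

def frank_tconorm(a, b, lam=2.0):
    # Dual t-conorm via De Morgan: S(a, b) = 1 - T(1 - a, 1 - b).
    return 1 - frank_tnorm(1 - a, 1 - b, lam)
```

The boundary conditions T(1, b) = b and T(a, 0) = 0 hold for every admissible lambda, which is what makes Frank operations valid aggregation building blocks.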
Siamese detection network based real-time video tracking algorithm
DENG Yang, XIE Ning, YANG Yang
Journal of Computer Applications    2019, 39 (12): 3440-3444.   DOI: 10.11772/j.issn.1001-9081.2019081427
Currently, in the field of video tracking, typical Siamese network based algorithms only locate the center point of the target, which results in poor locating performance on fast-deforming objects. Therefore, a real-time video tracking algorithm based on a Siamese detection network, called Siamese-FC Region-convolutional neural network (SiamRFC), was proposed. SiamRFC can directly predict the center position of the target and thus cope with rapid deformation. Firstly, the position of the center point of the target was obtained by judging similarity. Then, the idea of object detection was used to return the optimal position by selecting from a series of candidate boxes. Experimental results show that SiamRFC performs well on the VOT2015|16|17 test sets.
Reversible data hiding method based on texture partition for medical images
CAI Xue, YANG Yang, XIAO Xingxing
Journal of Computer Applications    2018, 38 (8): 2293-2300.   DOI: 10.11772/j.issn.1001-9081.2017122885
To solve the problem that the contrast enhancement effect is constrained by the embedding rate in most existing Reversible Data Hiding (RDH) algorithms, a new RDH method based on texture partition for medical images was proposed. Firstly, the contrast of the image was stretched to enhance it; then, according to the texture characteristics of medical images, the image was divided into high and low texture levels, the key regions of a medical image mainly having a high texture level. To further enhance the contrast of the high texture level while guaranteeing the information embedding capacity, different embedding processes were adopted for the high and low texture levels. To compare the contrast enhancement effect of the proposed method with other RDH algorithms for medical images, No-Reference Contrast-Distorted Images Quality Assessment (NR-CDIQA) was adopted as the evaluation standard. The experimental results show that marked images processed by the proposed method achieve better NR-CDIQA scores and contrast enhancement at different embedding rates.
Six-legged robot path planning algorithm for unknown map
YANG Yang, TONG Dongbing, CHEN Qiaoyu
Journal of Computer Applications    2018, 38 (6): 1809-1813.   DOI: 10.11772/j.issn.1001-9081.2017112671
The global map cannot be accurately known in the path planning of mobile robots. In order to solve the problem, a local path planning algorithm based on fuzzy rules and artificial potential field method was proposed. Firstly, the ranging group and fuzzy rules were used to classify the shape of obstacles and construct the local maps. Secondly, a modified repulsive force function was introduced in the artificial potential field method. Based on the local maps, the local path planning was performed by using the artificial potential field method. Finally, with the movement of robot, time breakpoints were set to reduce path oscillation. For the maps of random obstacles and bumpy obstacles, the traditional artificial potential field method and the improved artificial potential field method were respectively used for simulation. The experimental results show that, in the case of random obstacles, compared with the traditional artificial potential field method, the improved artificial potential field method can significantly reduce the collision of obstacles; in the case of bumpy obstacles, the improved artificial potential field method can successfully complete the goal of path planning. The proposed algorithm is adaptable to terrain changes, and can realize the path planning of six-legged robot under unknown maps.
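The forces in the artificial potential field method can be sketched as follows. The repulsion here is the classic form scaled by the distance to the goal, a common modification that makes the force vanish at the goal and damps oscillation; whether this matches the paper's exact modified function is an assumption:

```python
import math

def attractive_force(pos, goal, k_att=1.0):
    # Linear attraction pulling the robot toward the goal.
    return [k_att * (g - p) for p, g in zip(pos, goal)]

def repulsive_force(pos, obstacle, goal, k_rep=1.0, d0=2.0):
    # Repulsion active only within influence distance d0, scaled by the
    # distance to the goal so it is zero exactly at the goal.
    dx = [p - o for p, o in zip(pos, obstacle)]
    d = math.hypot(*dx)
    if d >= d0:
        return [0.0, 0.0]
    dist_goal = math.hypot(*(p - g for p, g in zip(pos, goal)))
    mag = k_rep * (1 / d - 1 / d0) / d**2 * dist_goal
    return [mag * x / d for x in dx]
```

At each step the robot moves along the sum of the attractive force and the repulsive forces of all nearby obstacles.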
Analysis and design of uplink resource scheduling in narrow band Internet of things
CHEN Fatang, XING Pingping, YANG Yanjuan
Journal of Computer Applications    2018, 38 (11): 3270-3274.   DOI: 10.11772/j.issn.1001-9081.2018040849
Narrow Band Internet of Things (NB-IoT) technology is developing rapidly. Compared with previous wireless communication systems, the spectrum bandwidth of NB-IoT is only 180 kHz, so using resources and spectrum more efficiently (i.e., resource allocation and scheduling) becomes a key issue for NB-IoT technology. In order to solve this problem, the factors relevant to NB-IoT uplink resource scheduling were analyzed, including resource allocation, power control and the uplink transmission gap, and different options were compared to select the optimal scheme. In addition, the modulation and coding scheme and the selection of the number of repeated transmissions were analyzed in detail. A greedy-stable modulation and coding strategy based on different coverage levels and the power headroom report was proposed, with which the modulation and coding level is initially selected, and a compensation factor was introduced for selecting the number of retransmissions and updating the modulation and coding level. Finally, the proposed scheme was simulated. The simulation results show that the proposed scheme can save more than 56% of the activity time and 46% of the resource consumption compared with the direct transmission method.
Stipend prediction based on enhanced-discriminative canonical correlations analysis and classification ensemble
ZHANG Fangjuan, YANG Yan, DU Shengdong
Journal of Computer Applications    2018, 38 (11): 3150-3155.   DOI: 10.11772/j.issn.1001-9081.2018041259
To address the low efficiency and heavy workload of stipend management in higher education institutions, an algorithm of Enhanced-Discriminative Canonical Correlations Analysis (EN-DCCA) was proposed and combined with classification ensemble to predict the stipends of undergraduates. The multi-dimensional data of undergraduates at school were divided into two different views. Existing multi-view discriminative canonical correlation analysis algorithms do not consider both the correlation between view categories and the discrimination of the views' combined features. The optimization goal of EN-DCCA is to minimize inter-class correlation while maximizing intra-class correlation, taking into account the discrimination of the views' combined features, which further enhances attribute identification performance and is more conducive to classification prediction. The prediction process is as follows: firstly, according to the undergraduates' learning and living behaviors at school, the data were preprocessed into two different views; then, the two views were learned by EN-DCCA; finally, classification ensemble was used to complete the prediction. In experiments on a real data set, the prediction accuracy of the proposed method reached 90.01%, 2 percentage points higher than that of the Combined-feature-discriminability Enhanced Canonical Correlation Analysis (CECCA) method. The experimental results show that the proposed method can effectively predict stipends for higher education institutions.
Reference | Related Articles | Metrics
Improved dark channel prior dehazing algorithm combined with atmospheric light and transmission
CHEN Gaoke, YANG Yan, ZHANG Baoshan
Journal of Computer Applications    2017, 37 (5): 1481-1484.   DOI: 10.11772/j.issn.1001-9081.2017.05.1481
Abstract629)      PDF (851KB)(426)       Save
Since the dark channel prior poorly estimates the transmission and atmospheric light in bright regions, an improved dehazing algorithm combining atmospheric light and transmission was proposed. Based on an analysis of the characteristics of the Gaussian function, a preliminary transmission was estimated by applying a Gaussian function to the dark channel prior of the foggy image, and maximum and minimum filtering operations were used to eliminate the block effect. Next, the atmospheric light was obtained from an atmospheric light description area acquired through halo and morphological dilation operations. Finally, a clear image was reconstructed according to the atmospheric scattering model. The experimental results show that the proposed algorithm effectively removes fog from images and restores scenes with thick fog better than comparison algorithms such as dark channel prior; meanwhile, it has a faster processing speed and is suitable for real-time applications.
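The dark-channel-prior pipeline that this algorithm improves on can be sketched as follows. This is the basic dark channel estimate, not the paper's Gaussian-weighted refinement, and the patch size, omega and t_min values are conventional assumptions:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over RGB, then a local minimum filter."""
    mins = img.min(axis=2)
    h, w = mins.shape
    pad = np.pad(mins, patch // 2, mode='edge')
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, A, omega=0.95, t_min=0.1):
    """Recover J from I = J*t + A*(1-t), with t estimated from the
    dark channel of the normalized hazy image."""
    t = np.clip(1.0 - omega * dark_channel(img / A), t_min, 1.0)
    return (img - A) / t[..., None] + A
```

When the true scene has a zero dark channel, this inversion recovers it exactly for a known atmospheric light and constant transmission.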
Reference | Related Articles | Metrics
Single image in-depth dehazing algorithm based on optimization of guided image
DONG Yufei, YANG Yan, CAO Biting
Journal of Computer Applications    2017, 37 (1): 268-272.   DOI: 10.11772/j.issn.1001-9081.2017.01.0268
Abstract673)      PDF (1081KB)(414)       Save
Aiming at quality degradation such as reduced contrast and color distortion in images captured in hazy and foggy weather, a single-image in-depth dehazing algorithm based on optimization of the guided image was proposed. After analyzing the characteristics of the atmospheric veil, the local mean and standard deviation of the image were used to optimize the guided image; the guided image was then further processed with dual-zone filtering to obtain a smooth, sharp-edged guide. The atmospheric veil was estimated through fast guided filtering. Finally, a clear image was recovered based on the atmospheric scattering physical model. The experimental results show that the recovered image is clear, natural and rich in details: the close view is dehazed completely, while dehazing of the distant view is greatly improved. The proposed algorithm performs well where the scene depth changes abruptly, and improves the visibility and robustness of outdoor vision systems.
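The role of the local mean and standard deviation in shaping the guide can be illustrated with the rough stand-in below; it is not the paper's dual-zone filter, and the eps constant and weighting rule are assumptions:

```python
import numpy as np

def local_stats(gray, r=2):
    """Local mean and standard deviation over (2r+1)x(2r+1) windows."""
    h, w = gray.shape
    pad = np.pad(gray, r, mode='edge')
    mean = np.empty_like(gray)
    std = np.empty_like(gray)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * r + 1, j:j + 2 * r + 1]
            mean[i, j], std[i, j] = win.mean(), win.std()
    return mean, std

def optimize_guide(gray, r=2, eps=1e-3):
    """Edge-aware guide: flat regions (low local variance) are pulled toward
    the local mean, strong edges (high variance) are kept."""
    mean, std = local_stats(gray, r)
    w = std**2 / (std**2 + eps)        # ~0 when flat, ~1 at edges
    return w * gray + (1 - w) * mean
```

A constant region passes through unchanged, while noisy flat regions are smoothed toward their local mean, which is the smooth-yet-sharp-edged behavior the guide needs.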
Reference | Related Articles | Metrics
Fast image dehazing algorithm based on relative transmittance estimation
YANG Yan, WANG Fan, BAI Haiping
Journal of Computer Applications    2016, 36 (3): 806-810.   DOI: 10.11772/j.issn.1001-9081.2016.03.806
Abstract589)      PDF (904KB)(358)       Save
Since the dark channel prior algorithm produces dim restoration results and takes too long to run, a fast single-image dehazing algorithm based on relative transmittance estimation was proposed. Based on an analysis of the relationship between scene depth under haze and the minimum-channel image of the RGB color channels, a preliminary transmittance was estimated from the relative amount of scene depth, and then adjusted with an improved mean filter. Finally, the clear image was recovered with the atmospheric scattering model, and the brightness was enhanced to improve the visual effect. The proposed transmittance estimation is simple and effective; the restored images are clear and natural, with high detail visibility and good scene layering. The experimental results show that the proposed algorithm greatly improves both dehazing quality and computation time, which is conducive to real-time application.
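The minimum-channel route to transmittance can be sketched as below; a plain box mean stands in for the paper's improved mean filter, and the weight w, floor t0 and window radius are assumed values:

```python
import numpy as np

def fast_dehaze(img, A=1.0, w=0.9, t0=0.1, r=2):
    """Estimate transmittance from the RGB minimum-channel image (taken as
    a proxy for relative scene depth), smooth it with a box mean, and
    invert the scattering model I = J*t + A*(1-t)."""
    m = img.min(axis=2) / A                 # minimum-channel image
    pad = np.pad(m, r, mode='edge')
    sm = np.empty_like(m)
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            sm[i, j] = pad[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    t = np.clip(1.0 - w * sm, t0, 1.0)      # more haze -> lower transmittance
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
```

An already clear image (zero minimum channel) yields t = 1 and passes through untouched, while a uniformly hazed version has its haze offset largely removed.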
Reference | Related Articles | Metrics
Data analysis method for parallel DHP based on Hadoop
YANG Yanxia, FENG Lin
Journal of Computer Applications    2016, 36 (12): 3280-3284.   DOI: 10.11772/j.issn.1001-9081.2016.12.3280
Abstract624)      PDF (830KB)(385)       Save
A bottleneck of the Apriori algorithm for mining association rules is that the candidate 2-itemset C2 is used to generate the frequent 2-itemset L2. In the Direct Hashing and Pruning (DHP) algorithm, a generated hash table H2 is used to delete unusable candidate itemsets in C2, improving the efficiency of generating L2. However, the traditional DHP is a serial algorithm and cannot effectively handle large-scale data. To solve this problem, a parallel DHP algorithm, termed H_DHP, was proposed. First, the feasibility of the parallel strategy in DHP was analyzed and proved theoretically. Then, the generation of the hash table H2 and of the frequent itemsets L1 and L3-Lk was parallelized based on Hadoop, and the association rules were generated with the HBase database. The simulation results show that, compared with the DHP algorithm, the H_DHP algorithm performs better in data processing efficiency, data set size, speedup and scalability.
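The serial hashing-and-pruning step that H_DHP parallelizes works as follows: while counting 1-itemsets, every 2-itemset of each transaction is hashed into a bucket counter H2; since a bucket count is always at least the true count of any pair hashed into it, a pair whose bucket stays below the support threshold can be pruned before the counting pass. A minimal sketch (bucket count is an assumed parameter):

```python
from itertools import combinations
from collections import Counter

def dhp_frequent_pairs(transactions, min_support, n_buckets=8):
    """DHP: prune candidate 2-itemsets with the hash table H2 before counting."""
    item_counts = Counter()
    buckets = [0] * n_buckets                 # the hash table H2
    bucket_of = lambda pair: hash(pair) % n_buckets
    for t in transactions:
        item_counts.update(t)
        for pair in combinations(sorted(t), 2):
            buckets[bucket_of(pair)] += 1
    L1 = {i for i, c in item_counts.items() if c >= min_support}
    # a pair can only be frequent if its bucket reached min_support
    candidates = [p for p in combinations(sorted(L1), 2)
                  if buckets[bucket_of(p)] >= min_support]
    pair_counts = Counter()
    for t in transactions:
        s = set(t)
        pair_counts.update(p for p in candidates if set(p) <= s)
    return {p for p, c in pair_counts.items() if c >= min_support}
```

Hash collisions can only over-count a bucket, so pruning never discards a truly frequent pair; the final exact count removes any false survivors.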
Reference | Related Articles | Metrics
Data preprocessing based recovery model in wireless meteorological sensor network
WANG Jun, YANG Yang, CHENG Yong
Journal of Computer Applications    2016, 36 (10): 2647-2652.   DOI: 10.11772/j.issn.1001-9081.2016.10.2647
Abstract674)      PDF (1082KB)(693)       Save
To reduce the excessive communication energy consumption caused by the large number of sensor nodes and highly redundant sensor data in wireless meteorological sensor networks, a Data Preprocessing Model based on Joint Sparsity (DPMJS) was proposed. By combining the meteorological forecast value with the value of every cluster head in the Wireless Sensor Network (WSN), DPMJS computes a common portion with which to process the sensor data. A data collection framework based on distributed compressed sensing was also applied to reduce data transmission and balance energy consumption in the cluster network; data measured at common nodes were recovered at the sink node, radically reducing data communication. A suitable method to sparsify abnormal data was also designed. In simulations, DPMJS enhanced data sparsity by exploiting spatio-temporal correlation efficiently and improved the data recovery rate by 25%; compared with compressed sensing, the data recovery rate was improved by 46%; meanwhile, abnormal data were recovered successfully with a high probability of 96%. Experimental results indicate that the proposed data preprocessing model can increase data recovery efficiency, significantly reduce the amount of transmission, and prolong the network lifetime.
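The joint-sparsity idea behind DPMJS — a shared common portion (here, the forecast) plus a sparse per-node innovation — can be sketched as follows; the thresholding rule and tolerance are illustrative assumptions, not the paper's model:

```python
import numpy as np

def joint_sparsify(readings, forecast, tol=0.05):
    """Split node readings into the common part (the shared forecast) plus a
    per-node innovation; deviations below tol are zeroed, so the innovation
    is sparse whenever nodes roughly agree with the forecast."""
    innovation = readings - forecast
    innovation[np.abs(innovation) < tol] = 0.0
    return innovation

def recover(forecast, innovation):
    """Sink-side reconstruction: common part + sparse innovation."""
    return forecast + innovation
```

Only the sparse innovation needs to traverse the network; the sink already knows the forecast, so reconstruction is a single addition.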
Reference | Related Articles | Metrics
Contrast restoration algorithm for single image based on physical model
WANG Fan, YANG Yan, BAI Haiping
Journal of Computer Applications    2015, 35 (8): 2291-2294.   DOI: 10.11772/j.issn.1001-9081.2015.08.2291
Abstract485)      PDF (912KB)(345)       Save

Concerning that parameter estimation in defogging algorithms based on image restoration easily causes loss of scene information, a new single-image defogging algorithm was proposed. On the basis of the dark channel prior method, the atmospheric scattering model was analyzed and the influence of fog distribution on the dark channel image was summarized, which serves as the basis for adding fog to outdoor images. The transmittance was estimated through the scene-depth relationship between the fog-added reference image and the outdoor image to be defogged. The algorithm uses a physical model and multiple images to estimate the relevant parameters, and retains scene information better. The experimental results show that the proposed algorithm is more effective than the comparison algorithms, and its processing speed is also significantly improved.
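Adding fog via the atmospheric scattering model, and recovering transmittance from a fog-free/foggy image pair, can be sketched as below; the channel-averaging and the eps stabilizer are assumptions for numerical safety, not the paper's exact procedure:

```python
import numpy as np

def add_fog(J, t, A=1.0):
    """Synthesize fog with the atmospheric scattering model I = J*t + A*(1-t)."""
    return J * t + A * (1.0 - t)

def estimate_t(I, J, A=1.0, eps=1e-6):
    """Invert the model given a fog-free reference and its foggy counterpart:
    t = (I - A) / (J - A), averaged over channels for stability."""
    return np.clip(((I - A) / (J - A + eps)).mean(axis=2), 0.0, 1.0)
```

For a synthetic pair generated with a known transmittance, the inversion recovers that transmittance almost exactly wherever the scene differs from the atmospheric light.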

Reference | Related Articles | Metrics
Short question classification based on semantic extensions
YE Zhonglin, YANG Yan, JIA Zhen, YIN Hongfeng
Journal of Computer Applications    2015, 35 (3): 792-796.   DOI: 10.11772/j.issn.1001-9081.2015.03.792
Abstract566)      PDF (789KB)(556)       Save

Question classification is one of the tasks in a question answering system. Since questions often contain rare words and colloquial expressions, especially in voice-interaction applications, traditional text classifiers perform poorly on short questions. Thus a short question classification algorithm based on semantic extension was proposed: a search engine is used to extend the knowledge of a short question, features are selected with a topic model, word similarities are computed, and the question's category is thereby obtained. The experimental results show that the proposed method achieves an F-measure of 0.713 on a set of 1365 real questions, higher than that of the Support Vector Machine (SVM), K-Nearest Neighbor (KNN) and maximum entropy algorithms. The accuracy of question classification in a question answering system can therefore be improved by the above method.
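The extend-then-match idea can be sketched with a bag-of-words stand-in; the snippets, category keyword profiles, and cosine matching below are illustrative assumptions replacing the paper's search-engine expansion and topic-model feature selection:

```python
from collections import Counter
from math import sqrt

def expand(question, snippets):
    """Enrich a short question with (hypothetical) search-engine snippets."""
    return question + " " + " ".join(snippets)

def cosine(a, b):
    """Bag-of-words cosine similarity between two texts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(question, snippets, category_profiles):
    """Pick the category whose keyword profile best matches the expanded text."""
    text = expand(question, snippets)
    return max(category_profiles, key=lambda c: cosine(text, category_profiles[c]))
```

A four-word question that shares no vocabulary with any category profile becomes classifiable once its snippets contribute topical words.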

Reference | Related Articles | Metrics
Fault diagnosis method of high-speed rail based on compute unified device architecture
CHEN Zhi, LI Tianrui, LI Ming, YANG Yan
Journal of Computer Applications    2015, 35 (10): 2819-2823.   DOI: 10.11772/j.issn.1001-9081.2015.10.2819
Abstract409)      PDF (703KB)(406)       Save
Concerning the problem that traditional fault diagnosis of High-Speed Rail (HSR) vibration signals is slow and cannot meet real-time processing requirements, an accelerated fault diagnosis method for HSR vibration signals based on Compute Unified Device Architecture (CUDA) was proposed. First, the HSR data were processed by CUDA-based Empirical Mode Decomposition (EMD); then, the fuzzy entropy of each resulting component was calculated. Finally, the K-Nearest Neighbor (KNN) classification algorithm was used to classify the feature space composed of multiple fuzzy entropy features. The experimental results show that the proposed method is effective for fault classification of HSR vibration signals, and its processing speed is significantly improved compared with the traditional method.
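The fuzzy entropy feature used above is like sample entropy but with a smooth exponential membership instead of a hard threshold. A plain-numpy sketch (the embedding dimension m and tolerance r are conventional defaults, not the paper's settings; the CUDA acceleration and KNN stage are omitted):

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2):
    """Fuzzy entropy of a 1-D signal: similarity of length-m templates is
    measured by exp(-(d/r)^2) on the Chebyshev distance d."""
    x = np.asarray(x, dtype=float)
    r = r * x.std()
    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        templates = templates - templates.mean(axis=1, keepdims=True)
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        sim = np.exp(-(d / r) ** 2)
        np.fill_diagonal(sim, 0.0)            # exclude self-matches
        return sim.sum() / (n * (n - 1))
    return -np.log(phi(m + 1) / phi(m))
```

A regular signal (e.g., a sine) scores low, while white noise scores high, which is what makes the feature useful for separating fault states.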
Reference | Related Articles | Metrics
Real-time fault-tolerant technology for Hadoop based on heartbeat expired time mechanism
GUAN Guodong, TENG Fei, YANG Yan
Journal of Computer Applications    2015, 35 (10): 2784-2788.   DOI: 10.11772/j.issn.1001-9081.2015.10.2784
Abstract471)      PDF (754KB)(385)       Save
The heartbeat mechanism in Hadoop is unsuitable for short jobs and ignores fairness in setting the expired times of nodes in a heterogeneous cluster. To overcome this problem, a fair expired-time fault-tolerant mechanism was proposed. First, a failure misjudgment loss model and a Fair MisJudgment Loss (FMJL) algorithm were put forward according to the reliability and computational performance of nodes, so as to meet the requirements of long and short jobs at the same time. A fair expired-time mechanism based on the FMJL algorithm was then designed and implemented. Running a 345-second short job on Hadoop with the proposed mechanism saved 44% of the completion time when a TaskTracker node failed, and 23% compared with the self-adaptive expired-time mechanism. The experimental results show that the proposed fair expired-time mechanism shortens fault-tolerant processing time without affecting the completion time of long jobs, and improves the real-time processing efficiency of a heterogeneous Hadoop cluster.
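The misjudgment-loss trade-off can be illustrated with the toy model below: declaring a live node dead (its heartbeat merely delayed) wastes in-flight work, while waiting on a truly dead node delays rescheduling. The exponential delay tail and linear waiting cost are invented for illustration, not the paper's FMJL model:

```python
import math

def misjudgment_loss(timeout, heartbeat_period, failure_rate, job_len):
    """Expected loss of a given expired time: (false-dead probability) x
    (wasted work) + (failure rate) x (time spent waiting before rescheduling)."""
    p_false_dead = math.exp(-timeout / heartbeat_period)  # assumed delay tail
    return p_false_dead * job_len + failure_rate * timeout

def fair_timeout(heartbeat_period, failure_rate, job_len, grid):
    """Pick the expired time minimizing the expected misjudgment loss,
    in the spirit of the FMJL algorithm."""
    return min(grid, key=lambda t: misjudgment_loss(t, heartbeat_period,
                                                    failure_rate, job_len))
```

Under this model, longer jobs justify longer timeouts (a false failure verdict wastes more work), while short jobs get aggressive timeouts, matching the fairness goal described above.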
Reference | Related Articles | Metrics